88 research outputs found
A Convex Approach to Hydrodynamic Analysis
We study stability and input-state analysis of three-dimensional (3D)
incompressible, viscous flows with invariance in one direction. By taking
advantage of this invariance property, we propose a class of Lyapunov and
storage functionals. We then consider exponential stability, induced L2-norms,
and input-to-state stability (ISS). For streamwise constant flows, we formulate
conditions based on matrix inequalities. We show that in the case of polynomial
laminar flow profiles the matrix inequalities can be checked via convex
optimization. The proposed method is illustrated by an example of rotating
Couette flow.

Comment: Preliminary version submitted to the 54th IEEE Conference on Decision
and Control, Dec. 15-18, 2015, Osaka, Japan.
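The matrix-inequality conditions can be illustrated with a minimal numerical sketch: for a candidate quadratic Lyapunov function V(x) = xᵀPx, exponential decay with rate α follows if P is positive definite and AᵀP + PA + 2αP is negative definite. The Python sketch below checks this by eigenvalue inspection on a hypothetical 2x2 linear system (an assumed example, not the rotating Couette flow, and a numerical check rather than the convex-optimization formulation the abstract describes):

```python
import numpy as np

def check_exponential_stability(A, P, alpha):
    """Test the Lyapunov matrix inequality A^T P + P A + 2*alpha*P < 0
    (a sufficient condition for exponential decay with rate alpha) by
    verifying negative definiteness via the eigenvalues of the symmetric
    matrix, together with positive definiteness of P."""
    M = A.T @ P + P @ A + 2 * alpha * P
    return bool(np.all(np.linalg.eigvalsh(M) < 0)) and \
           bool(np.all(np.linalg.eigvalsh(P) > 0))

# Hypothetical stable 2x2 dynamics (illustrative only)
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
P = np.eye(2)  # candidate Lyapunov matrix, V(x) = x^T P x
print(check_exponential_stability(A, P, alpha=0.1))  # → True
```

In the paper's setting the entries of P depend polynomially on the cross-stream variables, so the definiteness test becomes a sum-of-squares feasibility problem rather than a single eigenvalue check.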
Verification of Uncertain POMDPs Using Barrier Certificates
We consider a class of partially observable Markov decision processes
(POMDPs) with uncertain transition and/or observation probabilities. The
uncertainty takes the form of probability intervals. Such uncertain POMDPs can
be used, for example, to model autonomous agents with sensors with limited
accuracy, or agents undergoing a sudden component failure, or structural damage
[1]. Given an uncertain POMDP representation of the autonomous agent, our goal
is to propose a method for checking whether the system achieves optimal
performance while respecting safety requirements (e.g., bounds on fuel level
and velocity). To this end, we cast the POMDP problem as a switched
system. We then take advantage of this switched-system
characterization and propose a method based on barrier certificates for
optimality and/or safety verification. We then show that the verification task
can be carried out computationally by sum-of-squares programming. We illustrate
the efficacy of our method by applying it to a Mars rover exploration example.

Comment: 8 pages, 4 figures.
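The role of a barrier certificate for a switched system can be sketched numerically: a candidate function B should be nonpositive on the initial set, positive on the unsafe set, and non-increasing along trajectories under arbitrary switching. The Python sketch below only falsifies bad candidates by sampling; the paper certifies these conditions soundly via sum-of-squares programming. The system and sets are assumed illustrative examples, not the Mars rover model:

```python
import numpy as np

def barrier_sanity_check(B, modes, init_samples, unsafe_samples, n_steps=20):
    """Sampling-based sanity check for a candidate barrier certificate B
    of a switched system x_{k+1} = f_q(x_k): B <= 0 on initial samples,
    B > 0 on unsafe samples, and B non-increasing along trajectories under
    random switching. (Not a proof; a sound check would use SOS programming.)"""
    if any(B(x) > 0 for x in init_samples):
        return False
    if any(B(x) <= 0 for x in unsafe_samples):
        return False
    rng = np.random.default_rng(0)
    for x0 in init_samples:
        x = np.array(x0, dtype=float)
        for _ in range(n_steps):
            x_next = modes[rng.integers(len(modes))](x)  # arbitrary switch
            if B(x_next) > B(x) + 1e-9:
                return False
            x = x_next
    return True

# Illustrative switched system with two contractive modes (assumed example)
modes = [lambda x: 0.9 * x,
         lambda x: 0.8 * np.array([x[1], -x[0]])]   # scaled rotation
B = lambda x: float(np.dot(x, x)) - 1.0             # unsafe set: ||x|| > 1
init = [np.array([0.5, 0.0]), np.array([0.0, -0.6])]
unsafe = [np.array([1.5, 0.0]), np.array([0.0, 2.0])]
print(barrier_sanity_check(B, modes, init, unsafe))  # → True
```

Replacing a mode with an expansive map (e.g., x ↦ 1.5x) makes the check fail, since B then increases along some trajectory.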
Cost-Bounded Active Classification Using Partially Observable Markov Decision Processes
Active classification, i.e., the sequential decision-making process aimed at
data acquisition for classification purposes, arises naturally in many
applications, including medical diagnosis, intrusion detection, and object
tracking. In this work, we study the problem of actively classifying dynamical
systems with a finite set of Markov decision process (MDP) models. We are
interested in finding strategies that actively interact with the dynamical
system, and observe its reactions so that the true model is determined
efficiently with high confidence. To this end, we present a decision-theoretic
framework based on partially observable Markov decision processes (POMDPs). The
proposed framework relies on assigning a classification belief (a probability
distribution) to each candidate MDP model. Given an initial belief, some
misclassification probabilities, a cost bound, and a finite time horizon, we
design POMDP strategies leading to classification decisions. We present two
different approaches to find such strategies. The first approach computes the
optimal strategy "exactly" using value iteration. To overcome the computational
complexity of finding exact solutions, the second approach is based on adaptive
sampling to approximate the optimal probability of reaching a classification
decision. We illustrate the proposed methodology using two examples from
medical diagnosis and intruder detection.
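The core of the framework, maintaining a classification belief over candidate MDP models, reduces to a Bayes update after each action/observation pair. A minimal Python sketch with two hypothetical candidate models (assumed action and observation names, not from the paper's examples):

```python
def belief_update(belief, obs_models, action, obs):
    """One Bayes update of the classification belief over candidate MDP
    models: model m assigns probability obs_models[m][action][obs] to
    seeing `obs` after taking `action`."""
    posterior = [b * obs_models[m][action][obs] for m, b in enumerate(belief)]
    z = sum(posterior)                      # normalization constant
    return [p / z for p in posterior]

# Two hypothetical candidate models distinguished by the action "probe"
obs_models = [
    {"probe": {"hit": 0.9, "miss": 0.1}},   # model 0 (assumed true model)
    {"probe": {"hit": 0.5, "miss": 0.5}},   # model 1
]
belief = [0.5, 0.5]                         # uniform initial belief
for _ in range(3):                          # observe "hit" three times
    belief = belief_update(belief, obs_models, "probe", "hit")
# a classification decision is issued once the belief exceeds
# 1 minus the allowed misclassification probability
print(round(belief[0], 3))  # → 0.854
```

Finding strategies that drive the belief to such a threshold within the cost bound is what the exact value-iteration and adaptive-sampling approaches in the paper compute.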
Identification of multiple-input single-output Hammerstein models using Bezier curves and Bernstein polynomials
This paper considers the implementation of Bezier–Bernstein polynomials and the Levenberg–Marquardt algorithm for identifying multiple-input single-output (MISO) Hammerstein models consisting of nonlinear static functions followed by a linear dynamical subsystem. The nonlinear static functions are approximated by means of Bezier curves and Bernstein basis functions. The identification method is based on a hybrid scheme comprising the inverse de Casteljau algorithm, the least-squares method, and the Levenberg–Marquardt (LM) algorithm. Furthermore, results based on the proposed scheme demonstrate substantial identification performance.
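The Bezier-curve approximation of the static nonlinearity rests on de Casteljau's recursion: repeated linear interpolation of the control points evaluates the Bernstein-polynomial expansion exactly. The paper uses the *inverse* de Casteljau algorithm inside its hybrid scheme; the Python sketch below shows only the forward evaluation, with illustrative control points rather than identified ones:

```python
def de_casteljau(points, t):
    """Evaluate the Bezier curve with the given scalar control points at
    t in [0, 1] by repeated linear interpolation; the result equals the
    Bernstein-polynomial form sum_i points[i] * B_{i,n}(t)."""
    while len(points) > 1:
        points = [(1 - t) * a + t * b for a, b in zip(points, points[1:])]
    return points[0]

# Cubic Bernstein approximation of a static nonlinearity
# (illustrative control points, not identified from data)
ctrl = [0.0, 0.2, 0.9, 1.0]
print(de_casteljau(ctrl, 0.0), de_casteljau(ctrl, 1.0))  # endpoints: 0.0 1.0
```

Because the curve interpolates its first and last control points and depends linearly on them, the control points serve as well-conditioned parameters for the least-squares and LM steps of the identification scheme.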
Barrier Functions for Multiagent-POMDPs with DTL Specifications
Multi-agent partially observable Markov decision processes (MPOMDPs) provide a framework to represent heterogeneous autonomous agents subject to uncertainty and partial observation. In this paper, given a nominal policy provided by a human operator or a conventional planning method, we propose a technique based on barrier functions to design a minimally interfering safety shield that ensures the satisfaction of high-level specifications in terms of linear distribution temporal logic (LDTL). To this end, we use necessary and sufficient conditions for the invariance of a given set based on discrete-time barrier functions (DTBFs), and we formulate sufficient conditions based on finite-time DTBFs to study finite-time convergence to a set. We then show that different LDTL mission/safety specifications can be cast as a set of invariance or finite-time reachability problems. We show that the proposed safety-shield synthesis can be implemented online by a sequence of one-step greedy algorithms, and we demonstrate the efficacy of the method in experiments with a team of robots.
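The one-step greedy shield admits a compact sketch: at each state, among the candidate actions satisfying a discrete-time barrier condition, pick the one closest to the nominal action. The Python sketch below uses the common DTBF condition B(f(x, u)) ≥ (1 − γ)B(x) on a hypothetical 1D integrator, an assumed toy setup rather than the paper's MPOMDP and robot experiments:

```python
def safety_shield(x, u_nominal, actions, f, B, gamma=0.5):
    """One-step greedy safety shield: among candidate actions satisfying
    the discrete-time barrier condition B(f(x, u)) >= (1 - gamma) * B(x)
    (which renders the safe set {x : B(x) >= 0} invariant), return the
    action closest to the nominal one; if none qualifies, fall back to
    the action maximizing the next-step barrier value."""
    admissible = [u for u in actions if B(f(x, u)) >= (1 - gamma) * B(x)]
    if not admissible:
        return max(actions, key=lambda u: B(f(x, u)))
    return min(admissible, key=lambda u: abs(u - u_nominal))

# Hypothetical 1D integrator with safe set |x| <= 1 (illustrative only)
f = lambda x, u: x + u
B = lambda x: 1.0 - x * x
u = safety_shield(x=0.8, u_nominal=0.5,
                  actions=[-0.5, -0.25, 0.0, 0.25, 0.5], f=f, B=B)
print(u)  # nominal 0.5 would leave the safe set; shield picks 0.0
```

Because the check involves only the current state and one-step successors, the shield interferes with the nominal policy only when that policy would violate the barrier condition, matching the online, minimally interfering design described in the abstract.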